Unifying Probability and Logic for Learning
Authors
Abstract
Uncertain knowledge can be modeled by using graded probabilities rather than binary truth-values, but so far a completely satisfactory integration of logic and probability has been lacking. In particular, the inability to confirm universal hypotheses has plagued most, if not all, systems to date. We address this problem head-on. The main technical problem to be discussed is the following: given a set of sentences, each having some probability of being true, what probability should be ascribed to other (query) sentences? A natural wish-list, among others, is that the probability distribution (i) is consistent with the knowledge base, (ii) allows for a consistent inference procedure, and in particular (iii) reduces to deductive logic in the limit of probabilities being 0 and 1, (iv) allows (Bayesian) inductive reasoning and (v) learning in the limit, and in particular (vi) allows confirmation of universally quantified hypotheses/sentences. We show that probabilities satisfying (i)-(vi) exist, and present necessary and sufficient conditions (Gaifman and Cournot). The theory is a step towards a globally consistent and empirically satisfactory unification of probability and logic.
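As an illustration of requirement (vi), the following toy sketch (not taken from the paper; the hypothesis family and the geometric prior are hypothetical choices) shows the standard Bayesian mechanism by which a prior that assigns nonzero mass to a universal hypothesis lets accumulating confirming instances drive its posterior probability toward 1:

```python
# Toy Bayesian confirmation of a universal hypothesis over a binary sequence.
# Hypotheses (hypothetical setup, for illustration only):
#   H_inf      : "every observation is 1"  (the universal hypothesis)
#   H_k, k >= 0: "the first k observations are 1, observation k+1 is 0"
# A prior giving H_inf nonzero mass, with the remaining mass spread over
# the H_k as a geometric series, suffices for confirmation.

def posterior_universal(n, prior_inf=0.5):
    """Posterior probability of H_inf after n confirming observations (all 1s).

    H_k is consistent with n ones iff k >= n, in which case its likelihood is 1.
    Prior mass of H_k is (1 - prior_inf) * 2**-(k+1), so the total surviving
    alternative mass is sum_{k>=n} (1 - prior_inf) * 2**-(k+1)
                       = (1 - prior_inf) * 2**-n.
    """
    surviving_alternatives = (1 - prior_inf) * 2 ** -n
    return prior_inf / (prior_inf + surviving_alternatives)

for n in (0, 1, 5, 20):
    print(n, posterior_universal(n))
```

With each confirming instance, half of the remaining alternative mass is refuted, so the posterior of the universal hypothesis rises monotonically toward 1. By contrast, a prior that gives the universal hypothesis zero mass (as many earlier systems effectively do) keeps its posterior at zero forever, which is exactly the failure mode the abstract refers to.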
Similar Papers
Unifying Logic and Probability: A New Dawn for AI?
Logic and probability theory are two of the most important branches of mathematics and each has played a significant role in artificial intelligence (AI) research. Beginning with Leibniz, scholars have attempted to unify logic and probability. For “classical” AI, based largely on first-order logic, the purpose of such a unification is to handle uncertainty and facilitate learning from real data...
Knowledge and Data Fusion in Probabilistic Networks
Probability theory provides the theoretical basis for a logically coherent process of combining prior knowledge with empirical data to draw plausible inferences and to refine theories as observations accrue. Increases in the expressive power of languages for expressing probabilistic theories have been accompanied by refinements and adaptations of Bayesian learning methods to handle the more exp...
Statistical Relational Learning for Knowledge Extraction from the Web
Extracting knowledge from unstructured text has been a long-standing goal of NLP. The advent of the Web further increases its urgency by making available billions of online documents. To represent the acquired knowledge that is complex and heterogeneous, we need first-order logic. To handle the inherent uncertainty and ambiguity in extracting and reasoning with knowledge, we need probability. C...
Markov Logic: A Unifying Framework for Statistical Relational Learning
Interest in statistical relational learning (SRL) has grown rapidly in recent years. Several key SRL tasks have been identified, and a large number of approaches have been proposed. Increasingly, a unifying framework is needed to facilitate transfer of knowledge across tasks and approaches, to compare approaches, and to help bring structure to the field. We propose Markov logic as such a framew...
Unifying Logic and Probability: Recent Developments
Logic and probability theory are two of the most important branches of mathematics and each has played a significant role in artificial intelligence (AI) research. Beginning with Leibniz, scholars have attempted to unify logic and probability. For “classical” AI, based largely on first-order logic, the purpose of such a unification is to handle uncertainty and facilitate learning from real data...